Note: Clicking a Digital Object Identifier (DOI) number will take you to an external site maintained by the publisher. Some full-text articles may not yet be available without a charge during the embargo (administrative interval).
Some links on this page may take you to non-federal websites. Their policies may differ from this site.
- This study is part of a larger research project aimed at developing and implementing an NLP-enabled AI feedback tool called PyrEval to support middle school students’ science explanation writing. We explored how human-AI integrated classrooms can invite students to harness AI tools while still being agentic learners. Building on the theory of new materialism with posthumanist perspectives, we examined teacher framing to see how the nature of PyrEval was communicated, thereby orienting students to partner with, or rely on, PyrEval. We analyzed one teacher’s talk in multiple classrooms as well as that of students in small groups. We found that student agency was fostered through teacher framing of (a) PyrEval as a non-neutral actor and a co-investigator and (b) students’ participation as authors and their understanding of the nature of PyrEval as the core task and purpose. Findings and implications are discussed. (Free, publicly-accessible full text available July 9, 2026.)
- Automated feedback can provide students with timely information about their writing, but students’ willingness to engage meaningfully with the feedback to revise their writing may be influenced by their perceptions of its usefulness. We explored the factors that may have influenced 339 eighth-grade students’ perceptions of receiving automated feedback on their writing and whether their perceptions impacted their revisions and writing improvement. Using HLM and logistic regression analyses, we found that: (1) students with more positive perceptions of the automated feedback made revisions that resulted in significant improvements in their writing, and (2) students who received feedback indicating they included more important ideas in their essays had significantly higher perceptions of the usefulness of the feedback, but were significantly less likely to engage in substantive revisions. Implications, and the importance of helping students evaluate and reflect on the feedback to make substantive revisions no matter their initial feedback, are discussed. (Free, publicly-accessible full text available June 9, 2026.)
- As use of artificial intelligence (AI) has increased, concerns about AI bias and discrimination have been growing. This paper discusses an application called PyrEval in which natural language processing (NLP) was used to automate assessment and provide feedback on middle school science writing without linguistic discrimination. Linguistic discrimination in this study was operationalized as unfair assessment of scientific essays based on writing features that are not considered normative, such as subject-verb disagreement. Such unfair assessment is especially problematic when the purpose of assessment is not assessing English writing but rather assessing the content of scientific explanations. PyrEval was implemented in middle school science classrooms. Students explained their roller coaster design by stating relationships among such science concepts as potential energy, kinetic energy, and the law of conservation of energy. Initial and revised versions of scientific essays written by 307 eighth-grade students were analyzed. Our manual and NLP assessment comparison analysis showed that PyrEval did not penalize student essays that contained non-normative writing features. Repeated measures ANOVA and GLMM analysis results revealed that essay quality significantly improved from initial to revised essays after receiving the NLP feedback, regardless of non-normative writing features. Findings and implications are discussed. (Free, publicly-accessible full text available May 25, 2026.)
- Automated methods are increasingly used to support formative feedback on students’ science explanation writing. Most of this work addresses students’ responses to short-answer questions. We investigate automated feedback on students’ science explanation essays, which discuss multiple ideas. Feedback is based on a rubric that identifies the main ideas students are prompted to include in explanatory essays about the physics of energy and mass. We have found that students’ revisions generally improve their essays. Here, we focus on two factors that affect the accuracy of the automated feedback. First, learned representations of the six main ideas in the rubric differ with respect to their distinctiveness from each other, and therefore in the ability of automated methods to identify them in student essays. Second, sometimes a student’s statement lacks sufficient clarity for the automated tool to associate it more strongly with one of the main ideas above all others.
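The two accuracy factors above can be sketched in code. The snippet below is a hypothetical illustration, not the PyrEval implementation: it assigns a student sentence to the rubric idea with the highest cosine similarity between bag-of-words vectors, and flags the match as ambiguous when the runner-up idea scores nearly as high (the rubric texts and the `margin` threshold are invented for the example).

```python
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def match_idea(sentence: str, rubric: dict, margin: float = 0.1):
    """Return (best_idea, ambiguous) for a student sentence.

    The match is flagged as ambiguous when the runner-up rubric idea
    scores within `margin` of the best-scoring idea.
    """
    vec = Counter(sentence.lower().split())
    scores = sorted(
        ((cosine(vec, Counter(text.lower().split())), idea)
         for idea, text in rubric.items()),
        reverse=True,
    )
    (best_score, best_idea), (second_score, _) = scores[0], scores[1]
    return best_idea, (best_score - second_score) < margin

# Hypothetical rubric ideas, paraphrasing the energy concepts in the abstract.
rubric = {
    "PE": "potential energy increases with height",
    "KE": "kinetic energy increases with speed",
}
idea, ambiguous = match_idea("the cart gains kinetic energy as it speeds up", rubric)
```

A sentence that overlaps both rubric texts about equally would come back with `ambiguous` set to `True`, which mirrors the second accuracy factor described in the abstract.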
- Automated writing evaluation (AWE) systems automatically assess and provide students with feedback on their writing. Despite the learning benefits, students may not effectively interpret and utilize AI-generated feedback, thereby not maximizing their learning outcomes. A closely related issue is the accuracy of the systems, which students may not realize is imperfect. Our study investigates whether students differentially addressed false positive and false negative AI-generated feedback errors on their science essays. We found that students addressed nearly all of the false negative feedback; however, they addressed less than one-fourth of the false positive feedback. The odds of addressing a false positive feedback item were 99% lower than the odds of addressing a false negative feedback item, representing significant missed opportunities for revision and learning. We discuss the implications of these findings in the context of students’ learning.
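The "99% lower odds" finding is a statement about odds ratios, which can be reproduced from plausible rates. The numbers below are illustrative only, not the study's data: if students address, say, 97% of false-negative feedback ("nearly all") but only 22% of false-positive feedback ("less than one-fourth"), the odds ratio works out to roughly 0.01, i.e. about 99% lower odds.

```python
def odds(p: float) -> float:
    """Convert a probability into odds, p / (1 - p)."""
    return p / (1 - p)

# Illustrative rates only (not the study's actual counts): students address
# 97% of false-negative feedback but just 22% of false-positive feedback.
p_false_negative = 0.97
p_false_positive = 0.22

# Odds of addressing a false positive relative to a false negative.
odds_ratio = odds(p_false_positive) / odds(p_false_negative)
reduction = 1 - odds_ratio  # fraction by which the odds are lower
```

With these assumed rates, `odds_ratio` is about 0.009 and `reduction` about 0.99, matching the scale of the reported effect.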
- Hoadley, C.; Wang, X. C. (Eds.) The present study examined teachers’ conceptualization of the role of AI in addressing inequity. Grounded in speculative design and education, we examined eight public secondary school teachers’ thinking about AI in teaching and learning that may go beyond present horizons. Data were collected from individual interviews. Findings suggest that not only equity consciousness but also present engagement in a context of inequities was crucial to future dreaming of AI that does not harm but improves equity.
- Hoadley, C.; Wang, X. C. (Eds.) In this paper, we present a case study of designing AI-human partnerships in a real-world context of science classrooms. We designed a classroom environment where AI technologies, teachers, and peers worked synergistically to support students’ writing in science. In addition to an NLP algorithm to automatically assess students’ essays, we also designed (i) feedback that was easier for students to understand, (ii) participatory structures in the classroom focusing on reflection, peer review, and discussion, and (iii) scaffolding by teachers to help students understand the feedback. Our results showed that students improved their written explanations after receiving feedback and engaging in reflection activities. Our case study illustrates that Augmented Intelligence (USDoE, 2023), in which the strengths of AI complement the strengths of teachers and peers while also overcoming the limitations of each, can provide multiple forms of support to foster learning and teaching.
An official website of the United States government